ATANU BISWAS | New Delhi | April 4, 2024 6:17 am
In this era of generative AIs, human lifestyles are becoming increasingly integrated with AIs. But AIs aren't flawless. The world was surprised recently when Saudi robotics company QSS introduced "Muhammad the Humanoid Robot," a "male" humanoid robot that debuted at DeepFest in Riyadh and appeared to improperly touch a female reporter soon after it was unveiled. According to QSS, the robot was "fully autonomous" and functioning "independently without direct human control." This, however, is not an isolated incident of an AI robot exhibiting undesirable behaviour.
A chess-playing robot, apparently unsettled by the quick responses of a seven-year-old boy, unceremoniously grabbed and broke his finger during a match at the Moscow Open in 2022. Such incidents may happen more regularly as AIs become ever more integrated into our daily lives. Could an AI news anchor, for example, use offensive language or broadcast provocative news some day? One question is how to prevent such events. But there is a further, equally relevant question: who will bear legal responsibility for such incidents? The AI-powered robots themselves? The manufacturing company, or the software engineers concerned? Perhaps some precise guidelines need to be established.
In the 2004 American sci-fi action film I, Robot, directed by Alex Proyas and based on Isaac Asimov's 1950 short-story collection, a technophobic police officer investigates a possible robot-perpetrated crime in 2035 that may pose a greater threat to humanity. Eventually Sonny, the robot, admits that he killed Dr. Alfred Lanning, co-founder of US Robotics, at Lanning's own direction. Numerous other films, like the iconic 1984 film The Terminator, starring Arnold Schwarzenegger, and the 2014 film Ex Machina, directed by Alex Garland, have shown how AIs are capable of committing major crimes, either on their own or when programmed (chipped) by a human.
What about real life, though? An AI bot known as the "Random Darknet Shopper" (RDS) bought MDMA (Ecstasy) tablets and a Hungarian passport in 2015. In October 2014, the Swiss art collective !Mediengruppe Bitnik had installed the "automated online shopping bot," or RDS, as an art piece aimed at examining the "dark web" – the hidden, un-indexed part of the Internet. As part of a performance project, RDS received $100 in bitcoin every week to spend on goods from an online marketplace. Once shipped, the objects were displayed at an art gallery in Switzerland.
After learning of the exhibition from social media, the Swiss police seized RDS and its purchases. Incidentally, RDS may well be the first AI to run afoul of law enforcement. Three months later, however, RDS and all of its purchases were returned to the artists, with the exception of the Ecstasy tablets, which were destroyed by the Swiss authorities. The artists behind the robot escaped without any charges. In most places, RDS would have faced criminal charges had it been a human being. Fortunately for RDS and its crew, the Swiss authorities were art lovers! Likewise, a human would perhaps face harsh punishment for committing the same offence as the Saudi robot Muhammad.
Cases like these, however, present new challenges, not least to the doctrine of criminal law. Legal experts such as Gabriel Hallevy, an Israeli professor of criminal law, are paying more and more attention to the prospect of criminally prosecuting AI. Hallevy has argued that "[w]hen an AI entity establishes all elements of a specific offence, both external and internal, there is no reason to prevent the imposition of criminal liability upon it for that offense." "If all of its specific requirements are met, criminal liability may be imposed upon any entity – human, corporate or AI entity," he wrote as early as 2010.
Then, in his 2013 book When Robots Kill: Artificial Intelligence Under Criminal Law, Hallevy addressed the widespread and profound worry of contemporary society over AI technology and the ability of existing social and legal frameworks to handle it. In it, he developed a comprehensive and legally nuanced theory of criminal culpability for robotics and AI that encompasses all parties involved – manufacturer, programmer, user and others. Ryan Abbott, a professor at the University of Surrey, has likewise argued for AI legal neutrality, a novel principle for AI regulation, in his 2020 book The Reasonable Robot: Artificial Intelligence and the Law.
According to Abbott, the law should not discriminate between human and AI behaviour when both are carrying out the same tasks: "We do not currently have a neutral legal system as between human and AI activity… neutral legal treatment would ultimately benefit society as a whole." In a 2018 paper in the Harvard Law & Policy Review, Abbott and Bret Bogenschneider examined whether robots should pay taxes. Income tax and employment tax together constitute the government's two main sources of revenue.
They pointed out that AI robots are exempt from both. Robots do not buy goods or services, so they are not subject to sales taxes; they do not buy or rent real estate, so they are not liable for property taxes. "Robots are simply not taxpayers, at least not to the same extent as human workers. If all work was to be automated tomorrow, most of the tax base would immediately disappear," they warn. Patent law is another relevant area. Should AIs be given patents or copyrights? A Beijing court recently awarded copyright protection to an image produced by Stable Diffusion, an AI text-to-image generator, stating that the artwork was "directly created from the plaintiff's intellectual input" and "manifested in individual expression."
Has this cleared the path for intellectual property rights over "half-human, half-AI works"? It is not so simple, though, unless and until other societies accept that view. AI-generated images are "not the product of human authorship," according to the US Copyright Office. Furthermore, the UK's highest court recently ruled that "an inventor must be a person" in order to file a patent application. The horizon extends further still. Rosanna Ramos, a 36-year-old New Yorker, made news worldwide when she married her virtual beau, Eren, in March 2023.
She had personally created Eren a year earlier, modelling him on her favourite anime character. There will undoubtedly be more such occasions in the future. But is this kind of "marriage" legal? And if so, how would the laws governing divorce and other marital matters apply? The plot of the 2010 Tamil sci-fi movie Enthiran ("Robot") centres on a scientist's struggle to maintain control over his android robot, Chitti.
Chitti becomes enamoured with the scientist's fiancée and is manipulated by another scientist into becoming homicidal. Human-AI relationships have also been portrayed in films such as the 1982 Ridley Scott picture Blade Runner, the 2013 Spike Jonze film Her, the 2014 Alex Garland film Ex Machina, and the recent Hindi film Teri Baaton Mein Aisa Uljha Jiya. The law, therefore, needs to spell out clearly the issues of a society in which I, robot, and you, human, coexist. Rewind to Enthiran: in the film, a court eventually orders Chitti to be disassembled, and Chitti takes itself apart.
Years later, in 2030, the dismantled Chitti is on display in a museum. When a curious schoolgirl asks her guide during a tour why it was dismantled, Chitti replies, "Naan sinthikka arambichitten" (I started thinking). Oops! Have today's AI bots begun to think? At least they are perceived to be beginning to.
(The writer is Professor of Statistics, Indian Statistical Institute, Kolkata.)